Gradient Methods with Adaptive Step-Sizes

Authors

  • Bin Zhou
  • Li Gao
  • Yu-Hong Dai
Abstract

Motivated by the superlinear behavior of the Barzilai-Borwein (BB) method for two-dimensional quadratics, we propose two gradient methods which adaptively choose a small step-size or a large step-size at each iteration. The small step-size is primarily used to induce a favorable descent direction for the next iteration, while the large step-size is primarily used to produce a sufficient reduction. Although the new algorithms are still linearly convergent in the quadratic case, numerical experiments on some typical test problems indicate that they compare favorably with the BB method and some other efficient gradient methods.
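To make the idea concrete, here is a minimal sketch (assuming a strictly convex quadratic f(x) = 0.5 x'Ax - b'x) of a gradient method that chooses adaptively between the long (BB1) and short (BB2) Barzilai-Borwein step-sizes. The ratio test with threshold kappa is an illustrative switching rule, not necessarily the one proposed in the paper, and the function name adaptive_bb_gradient is hypothetical.

```python
import numpy as np

def adaptive_bb_gradient(A, b, x0, tol=1e-8, max_iter=1000, kappa=0.5):
    # Gradient method for f(x) = 0.5*x'Ax - b'x that switches between the
    # long (BB1) and short (BB2) Barzilai-Borwein step-sizes.  The ratio
    # test below is an illustrative switching rule, not necessarily the
    # one proposed in the paper.
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    gnorm = np.linalg.norm(g)
    if gnorm <= tol:
        return x, 0
    alpha = 1.0 / gnorm                     # conservative first step
    for k in range(1, max_iter + 1):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
        if np.linalg.norm(g) <= tol:
            break
        sy = s @ y                          # > 0 when A is positive definite
        bb1 = (s @ s) / sy                  # large step: sufficient reduction
        bb2 = sy / (y @ y)                  # small step: shapes next direction
        alpha = bb2 if bb2 / bb1 < kappa else bb1
    return x, k

# sanity check on a random symmetric positive-definite quadratic
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)
b = rng.standard_normal(5)
x_star, iters = adaptive_bb_gradient(A, b, np.zeros(5))
print(iters, np.linalg.norm(A @ x_star - b))
```

Note that bb2 never exceeds bb1 (by the Cauchy-Schwarz inequality), so the ratio bb2/bb1 lies in (0, 1] and measures how much the two step-size estimates disagree, which is what the switching rule above exploits.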

Related articles

Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values

We propose a novel class of stochastic, adaptive methods for minimizing self-concordant functions which can be expressed as an expected value. These methods generate an estimate of the true objective function by taking the empirical mean over a sample drawn at each step, making the problem tractable. The use of adaptive step sizes eliminates the need for the user to supply a step size. Methods ...
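A minimal sketch of the sample-then-step idea described above, on a synthetic least-squares problem: each iteration draws a fresh mini-batch, forms the empirical-mean gradient, and takes a step whose size adapts automatically (here a simple accumulated-gradient-norm rule, standing in for the paper's quasi-Newton scheme; all problem data below are fabricated for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic least-squares "expected value" problem (fabricated data)
d, n = 10, 5000
x_true = rng.standard_normal(d)
A_all = rng.standard_normal((n, d))
y_all = A_all @ x_true + 0.1 * rng.standard_normal(n)

def sample_grad(x, A_batch, y_batch):
    # Empirical-mean gradient over the batch, as the abstract describes:
    # gradient of (1/m) * sum_i 0.5*(a_i'x - y_i)^2
    r = A_batch @ x - y_batch
    return A_batch.T @ r / len(y_batch)

x = np.zeros(d)
G = 0.0                                   # accumulated squared gradient norms
for t in range(500):
    idx = rng.integers(0, n, size=64)     # fresh sample drawn at each step
    g = sample_grad(x, A_all[idx], y_all[idx])
    G += g @ g
    step = 1.0 / np.sqrt(G)               # adaptive: no user-supplied step size
    x -= step * g

print(np.linalg.norm(x - x_true))         # estimation error shrinks with t
```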


Online Learning with Adaptive Local Step Sizes

Almeida et al. have recently proposed online algorithms for local step size adaptation in nonlinear systems trained by gradient descent. Here we develop an alternative to their approach by extending Sutton’s work on linear systems to the general, nonlinear case. The resulting algorithms are computationally little more expensive than other acceleration techniques, do not assume statistical indep...
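The linear-case algorithm of Sutton that this work extends is IDBD; a compact sketch follows, with per-weight ("local") step sizes alpha_i = exp(beta_i) adapted online from the correlation of successive errors. The nonlinear extension itself is not reproduced here, and the constants (theta, the initial step sizes) are illustrative.

```python
import numpy as np

def idbd_update(w, h, beta, x, y, theta=0.01):
    # One step of Sutton's IDBD: local step sizes alpha_i = exp(beta_i),
    # adapted by a meta-gradient on the log step sizes.
    delta = y - w @ x                      # prediction error
    beta += theta * delta * x * h          # adapt log step sizes
    alpha = np.exp(beta)                   # per-weight step sizes
    w += alpha * delta * x                 # LMS-style weight update
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, h, beta

# quick demo on a noisy linear target (fabricated data)
rng = np.random.default_rng(2)
d = 8
w_true = rng.standard_normal(d)
w, h, beta = np.zeros(d), np.zeros(d), np.full(d, np.log(0.05))
for _ in range(5000):
    x = rng.standard_normal(d)
    y = w_true @ x + 0.01 * rng.standard_normal()
    w, h, beta = idbd_update(w, h, beta, x, y)
print(np.linalg.norm(w - w_true))
```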


Algorithms and networks for accelerated convergence of adaptive LDA

We introduce and discuss new accelerated algorithms for linear discriminant analysis (LDA) in unimodal multiclass Gaussian data. These algorithms use a variable step size, optimally computed in each iteration using (i) the steepest descent, (ii) conjugate direction, and (iii) Newton–Raphson methods in order to accelerate the convergence of the algorithm. Current adaptive methods based on the gr...
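For a quadratic cost with Hessian H, the "optimally computed" steepest-descent step has the closed form alpha* = (g'g)/(g'Hg). The sketch below applies this formula to a generic quadratic rather than the LDA criterion itself, so the cost function here is only a stand-in.

```python
import numpy as np

def exact_steepest_descent_step(H, g):
    # Step size minimizing J(x - alpha*g) exactly along -g
    # for a quadratic cost with Hessian H.
    return (g @ g) / (g @ (H @ g))

# demo on a small SPD quadratic J(x) = 0.5*x'Hx - b'x (fabricated data)
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
H = M @ M.T + np.eye(4)
b = rng.standard_normal(4)
x = np.zeros(4)
for _ in range(100):
    g = H @ x - b
    x -= exact_steepest_descent_step(H, g) * g
print(np.linalg.norm(H @ x - b))          # gradient norm at the iterate
```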


Energy Conservation and the Learning Ability of LMS Adaptive Filters

This chapter provides an overview of interesting phenomena pertaining to the learning capabilities of stochastic-gradient adaptive filters, and in particular those of the least-mean-squares (LMS) algorithm. The phenomena indicate that the learning behavior of adaptive filters is more sophisticated, and also more favorable, than was previously thought, especially for larger step-sizes. The discu...
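For reference, the LMS recursion the chapter analyzes fits in a few lines; the sketch below uses it to identify a short FIR channel, with the step size mu exposed as the quantity whose effect on learning behavior is at issue. The channel and noise level are fabricated for illustration.

```python
import numpy as np

def lms(x, d, order=4, mu=0.05):
    # Least-mean-squares adaptive filter: predict d[n] from the current
    # tapped-delay-line input, then nudge the weights along the
    # instantaneous gradient of the squared error.
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-order+1]]
        e[n] = d[n] - w @ u                # a priori estimation error
        w += mu * e[n] * u                 # stochastic-gradient update
    return w, e

# identify a short FIR channel from noisy observations (fabricated data)
rng = np.random.default_rng(4)
h_true = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(20000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = lms(x, d)
print(w)                                   # approaches h_true
```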


Online adaptive forecasting of a locally stationary time varying autoregressive process

In this work, we study the problem of online adaptive forecasting for locally stationary time-varying autoregressive (TVAR) processes. The Normalized Least Mean Squares (NLMS) algorithm is an online stochastic gradient method which has been shown to perform efficiently, provided that the gradient step size is well chosen. This choice highly depends on the smoothness exponent of the evolving par...
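To make the role of the step size concrete, here is a minimal NLMS one-step-ahead forecaster for an autoregressive signal: normalizing the update by the regressor energy makes the step self-scaling, yet the choice of mu still matters, which is the tuning issue the abstract raises. The AR(2) data with a slowly drifting coefficient is a synthetic stand-in for a locally stationary TVAR process.

```python
import numpy as np

def nlms_forecast(y, order=2, mu=0.5, eps=1e-6):
    # One-step-ahead forecasts with normalized LMS: the gradient step is
    # divided by the squared norm of the regressor, so the effective step
    # size is self-scaling.  mu and order are illustrative choices.
    w = np.zeros(order)
    pred = np.zeros(len(y))
    for n in range(order, len(y)):
        u = y[n - order:n][::-1]           # past values, most recent first
        pred[n] = w @ u
        e = y[n] - pred[n]
        w += mu * e * u / (eps + u @ u)    # normalized update
    return pred, w

# demo: AR(2) with a slowly drifting coefficient (fabricated data)
rng = np.random.default_rng(5)
T = 5000
y = np.zeros(T)
for n in range(2, T):
    a1 = 0.5 + 0.3 * np.sin(2 * np.pi * n / T)   # time-varying coefficient
    y[n] = a1 * y[n - 1] - 0.2 * y[n - 2] + 0.1 * rng.standard_normal()
pred, w = nlms_forecast(y)
print(np.mean((y[10:] - pred[10:]) ** 2))  # one-step prediction MSE
```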



Journal:
  • Comp. Opt. and Appl.

Volume 35, Issue -

Pages -

Publication year: 2006